The Shepard–Risset glissando: music that moves you
Authors
Abstract
Similar articles
The glissando illusion and handedness.
This article reports the first study of the glissando illusion, which was created and published as a sound demonstration by Deutsch [Deutsch, D. (1995). Musical illusions and paradoxes. La Jolla: Philomel Records (compact disc)]. To experience the illusion, each subject was seated in front of two stereophonically separated loudspeakers, with one to his left and the other to his right. A sound p...
XMAP215: A Tip Tracker that Really Moves
XMAP215 is a microtubule plus-end binding protein implicated in modulating microtubule dynamics. In this issue, Brouhard et al. (2008) propose a new mechanism to explain how XMAP215 promotes microtubule growth. They report that XMAP215 moves with the growing microtubule plus ends where it catalyzes the addition of tubulin subunits.
What You See Is What You Get: On Visualizing Music
Though music is fundamentally an aural phenomenon, we often communicate about music through visual means. The paper examines a number of visualization techniques developed for music, focusing especially on those developed for music analysis by specialists in the field, but also looking at some less successful approaches. It is hoped that, by presenting them in this way, those in the MIR communi...
You Call That Singing? Ensemble Classification for Multi-Cultural Collections of Music Recordings
The wide range of vocal styles, musical textures and recording techniques found in ethnomusicological field recordings leads us to consider the problem of automatically labeling the content to know whether a recording is a song or instrumental work. Furthermore, if it is a song, we are interested in labeling aspects of the vocal texture: e.g. solo, choral, acapella or singing with instruments. ...
You Said That?
We present a method for generating a video of a talking face. The method takes as inputs: (i) still images of the target face, and (ii) an audio speech segment; and outputs a video of the target face lip synched with the audio. The method runs in real time and is applicable to faces and audio not seen at training time. To achieve this we propose an encoder-decoder CNN model that uses a joint em...
Journal
Journal title: Experimental Brain Research
Year: 2017
ISSN: 0014-4819, 1432-1106
DOI: 10.1007/s00221-017-5033-1